Membership Inference Attacks against Machine Learning Models

Abstract

We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
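A minimal sketch of the shadow-model attack the abstract describes, using scikit-learn: several shadow models are trained on data from the same distribution as the target's training data, their prediction vectors on member and non-member records are labeled accordingly, and an attack classifier is trained on those labeled vectors and then applied to the black-box target's predictions. The model choices (RandomForestClassifier for the shadow and target models, LogisticRegression for the attack model), the synthetic dataset, and the use of a single attack model shared across classes (the paper trains one per output class) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for data drawn from the same distribution as the
# target model's training data (an assumption of this sketch).
X, y = make_classification(n_samples=6000, n_features=20, random_state=0)

def disjoint_split(X, y, n):
    """Return two disjoint (X, y) samples of size n each."""
    idx = rng.permutation(len(X))
    a, b = idx[:n], idx[n:2 * n]
    return (X[a], y[a]), (X[b], y[b])

# Train shadow models; record their prediction vectors on records they
# were trained on (label 1, "member") and on held-out records (label 0).
attack_inputs, attack_labels = [], []
for _ in range(5):
    (X_in, y_in), (X_out, y_out) = disjoint_split(X, y, 1000)
    shadow = RandomForestClassifier(n_estimators=50).fit(X_in, y_in)
    attack_inputs += [shadow.predict_proba(X_in), shadow.predict_proba(X_out)]
    attack_labels += [np.ones(len(X_in)), np.zeros(len(X_out))]

# Attack model: maps a prediction vector to a membership probability.
attack = LogisticRegression().fit(np.vstack(attack_inputs),
                                  np.concatenate(attack_labels))

# The target stands in for the black box; only predict_proba is queried.
(X_tr, y_tr), (X_held, _) = disjoint_split(X, y, 1000)
target = RandomForestClassifier(n_estimators=50).fit(X_tr, y_tr)

for name, record in [("member", X_tr[:1]), ("non-member", X_held[:1])]:
    p = attack.predict_proba(target.predict_proba(record))[0, 1]
    print(f"{name} record: inferred membership probability {p:.2f}")
```

In this toy setup the overfit shadow models are far more confident on their own training records than on held-out ones, and that confidence gap is exactly the signal the attack classifier learns to exploit.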
